Provable Security for Physical Cryptography

Author

  • Krzysztof Pietrzak
Abstract

The modern approach to cryptography is provable security, where one defines a meaningful formal security model and proves that schemes are secure in this model. An exception is the design of countermeasures against cryptographic side-channel attacks, which even today is mostly based on heuristic arguments that only try to prevent particular attacks. It was long believed that side-channels are a practical problem where theoretical cryptography is only of limited use, but recent results indicate that this view is too pessimistic, and in fact it is possible to extend the realm of provable security also to side-channel attacks. This survey is a personal and incomplete view of the current state of this exciting and fast-moving field.

1 Modern Cryptography

For most of history, cryptography was the art of "secret communication". The designers of encryption schemes were guided only by experience and intuition. Not surprisingly, pretty much all proposed schemes turned out to be insecure. It became evident that the only hope of getting secure cryptosystems is by means of provable security, that is,

1. to provide a precise and meaningful model capturing what it means to be "secure";
2. to design systems which can be proven secure in this model.

Provable security dates back at least to Shannon's proof that the one-time pad hides all information about the encrypted message [Sha49], but only with the rise of public-key cryptography [DH76,RSA78,Ell70,Coc73,Wil74], which requires constructions with a rich mathematical structure that can also be exploited by cryptanalysts, did provable security really take off. Modern cryptographic security definitions usually consider a "security game", which models how a potential adversary can attack the system. Classical examples are the definitions of CPA/CCA secure public-key encryption schemes [GM84,RS92], unforgeability for signature schemes [GMR88], and pseudorandomness [Yao82,BM84].
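As an aside, Shannon's one-time pad is simple enough to state in a few lines of code. The following minimal Python sketch (an illustration, not part of the survey) shows that encryption and decryption are the same XOR operation and that the key must be as long as the message:

```python
import secrets

def otp_encrypt(message: bytes, key: bytes) -> bytes:
    # The key must be as long as the message, uniformly random,
    # and used only once -- otherwise perfect secrecy is lost.
    assert len(key) == len(message)
    return bytes(m ^ k for m, k in zip(message, key))

# Decryption is the identical XOR operation.
otp_decrypt = otp_encrypt

msg = b"attack at dawn"
key = secrets.token_bytes(len(msg))
ct = otp_encrypt(msg, key)
assert otp_decrypt(ct, key) == msg
```

Shannon's lower bound, mentioned again in the history recap below, says this key length cannot be improved information-theoretically.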
More recent notions are security against key-dependent message attacks [CL01,BRS03,HK07,HU08] and security against selective openings [DNRS99,BHY09]. Proving security of a system then equates to showing that no (efficient) adversary can win the security game. Unfortunately, one often cannot hope to prove such a strong statement (as it would, e.g., imply P ≠ NP). In these cases one shows that the existence of an adversary who can win the game would imply that some problem generally believed to be hard is actually easy. Public-key cryptosystems can be based on many well-studied assumptions, like the hardness of factoring [Rab79,HK09] or the shortest vector problem in lattices [GGH97,Reg05].* Symmetric cryptography (aka secret-key cryptography) can be based on even much weaker assumptions, e.g. block-ciphers can be built from any one-way function [HILL99,GGM84,LR88], but for efficiency reasons, in practice block-ciphers like DES or AES are constructed from scratch and not via reductions ([NR97] is a notable exception).

* This survey accompanies a talk with the same title given at the WEWORC'09 workshop. First posted online September 27, 2009. Last update March 9, 2010.

1.1 Why Black-Box Isn't Enough

What basically all modern security notions have in common is that the cryptographic algorithm is modelled as a "black box", where an adversary can only observe the input/output behavior of the cryptographic algorithm as specified by the security game. Unfortunately, such models do not capture many real-world scenarios where an adversary can attack an actual implementation of a cryptosystem, which potentially leaks information to the adversary that cannot be learned from black-box access alone. In the mid-90s Kocher demonstrated that the secret key of the popular RSA cryptosystem can be recovered by simply measuring the time a cryptodevice needs to perform a decryption [Koc96a].
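Kocher's observation can be illustrated with the square-and-multiply algorithm used for modular exponentiation in RSA. This toy sketch (not from the survey; real timing attacks are far more refined and statistical) counts multiplications to make the data-dependence of the running time visible: every 1-bit of the secret exponent costs an extra multiplication.

```python
def square_and_multiply(base: int, exponent: int, modulus: int):
    # Naive left-to-right square-and-multiply. The multiplication count
    # (a stand-in for running time) depends on the secret exponent's bits.
    result, mults = 1, 0
    for bit in bin(exponent)[2:]:
        result = (result * result) % modulus      # always square
        mults += 1
        if bit == '1':
            result = (result * base) % modulus    # extra multiply for 1-bits only
            mults += 1
    return result, mults

r1, m1 = square_and_multiply(5, 0b1111, 1000003)  # dense exponent
r2, m2 = square_and_multiply(5, 0b1000, 1000003)  # sparse exponent
assert r1 == pow(5, 0b1111, 1000003)
assert m1 > m2  # more 1-bits -> more multiplications -> longer running time
```

A countermeasure such as Montgomery's ladder or exponent blinding removes exactly this data-dependence.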
Such attacks, where an adversary exploits leakage of information from a cryptodevice during execution, are called "side-channel" attacks (as opposed to standard cryptanalytic attacks, where the adversary only exploits the "main channel", i.e. the legitimate input/output behavior of the device). Light-weight cryptodevices like smart-cards or RFID chips are particularly susceptible to side-channel attacks, and although [Koc96b] was by no means the first side-channel attack, it came as quite a surprise to the cryptographic community how easily such devices could be broken. Since [Koc96b], many more ingenious side-channel attacks have been published, for example measuring the power consumption [KJJ99] or the electromagnetic radiation [QS01,GMO01] of a cryptodevice. Some attacks go beyond simply measuring physical properties of a device. Cold-boot attacks [HSH08] exploit the fact that memory retains its content for several seconds or even minutes after being ripped from a laptop. In a probing attack [AKA96], one measures the contents carried by some wires of the circuit which performs a computation (unlike the other attacks, probing attacks require rather elaborate equipment). A particularly intriguing class of attacks are "cache attacks" [OST06,RTSS09], which exploit leakage of information between different processes that run on the same CPU. Such leakage occurs due to the structure of memory caches on modern CPUs. Side-channel attacks are a very real threat for systems used in practice. A recent example is the complete break of the KeeLoq cipher, which is used as an anti-theft system in millions of cars [EKM08]. Not surprisingly, much research has concentrated on developing countermeasures against such attacks. This research is mostly done by practitioners (i.e., the cryptographic hardware community), who are also active in finding and exploiting new side-channels.
It was long believed that theory can only be of limited use in preventing side-channel attacks. But recent results indicate that this view was much too pessimistic, and in fact it is possible to extend the realm of provable security also to side-channel attacks, as we will see in this survey. We will only discuss countermeasures against passive attacks, where an adversary only observes leakage from a cryptodevice. In contrast, in an active attack [BDL97,BS91] the adversary actively tampers with the device, for example by cutting wires in the circuit or by heating or overclocking it in order to introduce random errors into the computation or memory content. Currently, there are only very few results on provable security against active attacks [GLM04,IPSW06,DPW09], but this is likely to change in the near future.

2 Side-Channel Attacks and Countermeasures

Countermeasures against side-channel attacks – as outlined above – can be on the hardware or the algorithmic level.

– On the hardware level, the aim is to construct physical devices which reduce the amount of leakage, for example by shielding the circuit (to avoid electromagnetic radiation) or by inserting transistors (to flatten the power consumption curve).
– On the algorithmic level, the aim is to design cryptosystems which remain secure even if some information about the secret internal state is leaked. This is usually done by some kind of internal randomisation (called masking or blinding, cf. [oEE] for a list of relevant papers) in order to avoid the occurrence of predictable intermediate results.

As argued in § 1.2 of [FRR10], due to the holographic bound conjecture – which asserts that the information contained in a volume of space is already encoded on the boundary of this region – in theory everything that goes on in a cryptodevice can be learned by measuring its surroundings.
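The masking countermeasure mentioned above can be sketched in a few lines. In this illustrative Python fragment (mine, not the survey's), each secret byte is split into two random shares whose XOR is the secret; a linear operation like XOR can then be computed share-wise, so no single intermediate value equals an unmasked secret:

```python
import secrets

def mask(secret: int):
    # Split an 8-bit secret into two shares: a fresh random byte r
    # and secret XOR r. Each share alone is uniformly random.
    r = secrets.randbelow(256)
    return (r, secret ^ r)

def masked_xor(x_shares, y_shares):
    # XOR is linear, so it can be computed on each share independently;
    # the unmasked inputs never appear as intermediate values.
    return (x_shares[0] ^ y_shares[0], x_shares[1] ^ y_shares[1])

def unmask(shares) -> int:
    return shares[0] ^ shares[1]

x, y = 0x3A, 0xC5
z = masked_xor(mask(x), mask(y))
assert unmask(z) == x ^ y
```

Non-linear operations (such as a cipher's S-boxes) are the expensive part of masking; handling them securely is exactly what schemes like the private-circuit compilers discussed below address.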
Fortunately, in practice we can still hope to get secure systems, as (1) a real-world adversary will not even get close to a perfect measurement of that boundary, and (2) even if what the adversary measures contains all the information about the secret state of the cryptodevice, it might still be computationally hard to extract any useful information (i.e. cryptographic keys) from it. In practice we can thus reasonably assume that a cryptodevice can keep at least some secrets, but it is unrealistic to assume the other extreme, i.e. that a "useful" device like a smart-card will leak no information at all. Thus, to get secure devices, a combination of hardware and algorithmic countermeasures must be in place.

2.1 Provable Security against Side-Channel Attacks?

As already mentioned, most of the work on side-channel attacks and countermeasures is done by practitioners, i.e. the cryptographic hardware community; the CHES workshop is their major venue. The "side-channel cryptanalysis lounge" [oEE] gives a good overview of this field. To some extent this research is a cat-and-mouse game: new side-channel attacks are found, and subsequently countermeasures are proposed. Those are usually ad hoc, in the sense that they aim at preventing some particular known type of attack, and they often come without any formal security proof. Research on side-channel security is thus quite different from the provable-security approach followed by modern cryptography. For example, in many works on side-channel countermeasures one encounters security arguments involving simulations. Because simulations can only show that some particular countermeasure is secure against some particular attack, they are meaningless in the context of provable security, where one has to quantify over all (time- and/or space-bounded) adversaries. Clearly, this situation cannot be satisfying from a cryptographic point of view.
What are our beautiful provably secure cryptosystems good for, when ultimately their security relies on ad-hoc countermeasures against side-channel attacks? Despite this, until recently the theory community did not pay much attention to this problem. One reason was the perception that side-channels are a practical problem, and that theory can only be of limited use in preventing them. It is not obvious what "provable security" should mean in the context of side-channel attacks. We cannot really hope to prove that a physical implementation like a smart-card does not leak its entire internal state; thus, we will always assume that the leakage from the physical device can be modelled by some class of leakage functions F, and then prove that under this assumption the implementation is secure. Several models have been proposed, which differ in how powerful the class F is (we will consider P, AC0, uninvertible functions and more particular classes) and in how the leakage function is applied (i.e. continuously or not, with restrictions on the range and/or domain of the functions). A main difference to the traditional, more applied approach to side-channel countermeasures is that one only restricts the class of leakage functions, but not the ways in which an adversary can exploit this leakage, by e.g. only considering template attacks [SMY09,PSP08].¹ This distinction is crucial, as there is no way to argue why an adversary should restrict herself to running some particular algorithm on the measured leakage. In contrast, limiting the class of leakage functions is meaningful (and to some extent necessary), as the adversary is limited in what leakage she can learn by the physical properties of the cryptodevice and her measurement equipment. What is gained by using provable security as just described?
After all, in order to get a secure implementation of a scheme, one still has to construct hardware whose potential leakage is captured by the class F for which this scheme can be proven secure.

Security: The main advantage is the security guarantee we can give for the implementation of the scheme. For example, consider the case where F contains all efficient functions of bounded range, as in the model of leakage-resilience discussed in Section 2.7. It is unlikely that the implementation of a leakage-resilient cryptosystem will turn out to be insecure due to a side-channel that was not foreseen by the designer of the device: no matter what kind of new side-channel attack is discovered, it can only threaten the implementation if it has a very high "leakage capacity", that is, the attack must exploit a significant amount of information leaked with every invocation. This contrasts with ad-hoc schemes, which can potentially be broken by a side-channel attack that exploits only very few (even less than one) bits of information leaked with every invocation.

Modularity: Another advantage is the "modularity" of the approach: cryptographers can design schemes which are secure in some precisely defined leakage model without having to care about any aspects of physical side-channel attacks. On the other hand, engineers can construct hardware whose leakage is captured by the leakage model without having to understand anything about the actual schemes that will be implemented on it. In particular, once such hardware is in place, it can be used to securely implement any scheme proven secure in the leakage model considered.

2.2 Physically Observable Cryptography

Micali and Reyzin [MR04] proposed the elegant and influential framework of "physically observable cryptography".
Unlike the other works we discuss below, the focus of [MR04] is not to construct primitives that are secure against some class of leakage functions F from scratch, but rather on side-channel security preserving reductions. They observe that standard cryptographic reductions will in general fail in the presence of leakage, but that in some cases one can still get meaningful results. In particular, they show that the Blum-Micali generator [BM84] is an unpredictable bit-generator (i.e. it outputs bits x1, x2, . . . such that xi+1 looks random after x1, . . . , xi have been computed), assuming the underlying one-way permutation f(.) can be implemented with a very strong security guarantee: for a random z, the output f(z) looks uniformly random given all the leakage that resulted from the computation of f(z). The framework of Micali and Reyzin is based on five axioms which they assume leakage from physical devices to adhere to. We will discuss their first, and somewhat controversial, "only computation leaks information" axiom later.

¹ Sometimes template attacks are referred to as "Bayesian adversaries", which is somewhat misleading, as those are not adversaries in a cryptographic sense (i.e. only resource-bounded) but refer to a very specific attack.

2.3 Private Circuits

Ishai et al. [ISW03,IPSW06] consider a model where the adversary can choose some wires on a circuit on which the cryptographic algorithm is computed, and then learns the values carried by those wires during the computation. Moreover, they consider continuous leakage, that is, the adversary can make such a measurement on every invocation of the circuit. What makes their work exceptional is that they were the first to prove how to implement any algorithm secure against an interesting side-channel (i.e. probing attacks).² Recently, Faust et al.
[FRR10] show that, surprisingly, such a general compiler can even be constructed for leakage functions that get all the values carried by all the wires as input, as long as the function is from a very low complexity class like AC0. This work is particularly interesting as it seems to give the first cryptographic application of unconditional lower bounds for constant-depth circuits [Hås86].³ The drawback of these general compilers is that the amount of leakage that can be tolerated is very small: in [ISW03], to tolerate t bits of leakage, the circuit must be blown up by a factor of at least t. Although this limits t to rather small values, it is still very meaningful with respect to the attacks considered, i.e. probing attacks, which become very impractical once one has to measure several wires simultaneously. The construction from [FRR10] additionally requires (albeit very simple) completely leakage-proof modules.

2.4 Protecting Storage

In some side-channel attacks, most notably cold-boot attacks [HSH08], the adversary learns just a subset of the bits stored in memory.⁴ This attack can be successful even if this fraction is rather small; in particular, [HS09] show how to recover an RSA key given just a 0.27 fraction of the key bits. Such attacks can be used to e.g. break disk encryption schemes, where the encryption key is often in memory in the clear (and not password protected as on the hard disk).

² Formally, Ishai et al. prove the following: let t ≥ 0 be some constant and let [X] denote a (t+1)-out-of-(t+1) secret sharing of the value X. They construct a general compiler which turns every circuit G(.) into a circuit Gt(.) (of size O(t|G|)) such that [G(X)] = Gt([X]) for all inputs X, and moreover one does not learn any information on G(X) even when given the values carried by any t wires in the circuit Gt(.) while evaluating the input [X].
This transformation uses multiparty computation, which is quite different from all the other approaches we discuss here.
³ It is an interesting open problem whether already the construction from [ISW03] is secure against leakage from low complexity classes.
⁴ How large this fraction is depends on the physical properties of the memory and on how fast the attack can be launched (i.e. how long the memory is without power).

Assuming the fraction of bits learned by the adversary is less than 1, one can to some extent protect against such attacks by keeping a (randomized) encoding f(s) in memory (whenever s is not in use), where the encoding guarantees that s remains secret even if a subset of the bits of f(s) is leaked. All-or-nothing transforms [Riv97], t-resilient functions [CGH85]⁵ and exposure-resilient functions [CDH00,DSS01] achieve exactly this. Recently, Davì and Dziembowski [DD09] considered the general problem of "leakage-resilient storage"; in particular, they show a probabilistic encoding Enc such that the leakage f(c) of a codeword c = Enc(s) contains almost no information about s, assuming only that (1) the range of f is bounded and (2) f can be computed by circuits of small size. The solutions discussed above can only protect against a cold-boot attack if one can be sure that the secret s is encoded at the point in time when the memory is removed. Next we discuss particular cryptosystems which remain secure in the much more hostile setting where the adversary can leak any bounded-range function of the key. Note that in this setting it is unavoidable that information about the key is leaked; thus the best one can hope for is that the particular cryptosystem using this key remains secure even after leakage.

2.5 Security Against Memory Attacks

Akavia et al. [AGV09] define the security notion of "security against memory attacks".
The idea is to consider a standard security notion but to give the adversary some extra power: she can initially choose any efficient leakage function f : {0,1}^n → {0,1}^{c·n} and gets the leakage f(sk). Here n is the length of the secret key sk and c is some parameter. Clearly, one must require c < 1, as otherwise f could just output the entire sk. Stronger notions have also been considered [KV09,NS09], where the leakage function gets not only the secret key as input, but the entire randomness which was used (by some key-generation algorithm) to sample it, and/or additional state information like the randomness used by a signature scheme. Akavia et al. [AGV09] show that the public-key encryption scheme of Regev [Reg05] and the identity-based scheme of Gentry et al. [GPV08] are secure against memory attacks under basically the same hardness assumptions (about lattices) as the original schemes. Naor and Segev [NS09] show that any hash-proof system [CS03,KD04,KPSY09], initially introduced by Cramer and Shoup to construct chosen-ciphertext secure public-key encryption schemes, can be used in a straightforward way to get public-key encryption secure against memory attacks. They achieve CPA/CCA1 and CCA2 security for leakage c = 1 − ε, c = 1/4 − ε and c = 1/6 − ε for any ε > 0, respectively (where c is the parameter as in the first paragraph of this section). Dodis et al. [ADW09] and Katz and Vaikuntanathan [KV09] construct signature schemes which are secure against memory attacks. In particular, they show that many signature schemes (like Okamoto or Schnorr) derived via the Fiat-Shamir transformation [FS87] are secure against memory attacks. These schemes can leak even a fraction c = 1 − ε, but due to the Fiat-Shamir transform, they can only be proven secure in the random oracle model. [KV09] also propose a scheme in the standard model, which however is inefficient as it uses NIZK proofs, as well as an efficient one-time signature scheme in the standard model.

⁵ A function f(.) is t-resilient if any t bits of f(s) are statistically independent of s. Exposure-resilient functions are defined similarly, but using a relaxed security notion (statistical instead of perfect independence), which allows a larger t to leak, i.e. t can be a 1 − ε fraction of all bits, as opposed to less than 1/2, which is the best one can get for t-resilient functions, as proven in [CGH85].

History recap on bounded storage/retrieval/leakage models. Ideally, a cryptographic scheme should be secure against any adversary, that is, secure in an information-theoretic sense. Unfortunately, there are inherent limitations to what can be achieved information-theoretically. Already Shannon proved that the one-time pad is basically optimal: to encrypt a message of length n, the sender and receiver must share a key of length at least n. To overcome this bound, one can put some reasonable assumptions on the power of an adversary. Usually, one assumes that the adversary is time-bounded (e.g. can be modelled as a Turing machine running in polynomial time). Maurer [Mau90] proposed a completely different approach, the "bounded storage model", where one assumes only a bound on the storage of the adversary. Informally, one assumes a (huge) secret R is available for some short time to all parties, but the adversary can only save the output f(R) of a (not necessarily efficient) compressing function f, say |f(R)| = |R|/2. Subsequently, the bounded-retrieval model (BRM) was proposed independently by Dziembowski and by Di Crescenzo et al. [Dzi06,DLW06]. Here the users have large (say, 2GB) secret keys, which are subject to a large amount (say, 1GB) of adversarial leakage (unlike in the bounded storage model, there is no other "short" secret the adversary has no access to).
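Maurer's bounded-storage idea can be sketched as follows. This is a toy illustration only (the names and sizes are mine, and real constructions use randomness extractors with rigorous bounds): a huge randomizer R is briefly visible to everyone, and the honest parties use a short shared secret to select which positions of R form their key.

```python
import secrets

# Toy bounded-storage sketch: R plays the role of the huge randomizer
# that is available to all parties for a short time.
R = secrets.token_bytes(1 << 20)  # 1 MiB here; think terabytes in the model

# The honest parties' short shared secret: 16 positions into R.
shared_positions = [secrets.randbelow(len(R)) for _ in range(16)]

# Once R disappears, the derived key is the bytes at those positions.
key = bytes(R[i] for i in shared_positions)
assert len(key) == 16

# An adversary who could store only |R|/2 bits of (any function of) R
# is, intuitively, missing information about the randomly selected
# positions, so the derived key retains entropy from her point of view.
```

The bounded-retrieval model replaces the public randomizer R with a huge secret key held by the honest users themselves.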
The BRM model for example captures a setting where we want to protect cryptographic keys even on a computer which is infected by malware (a virus or Trojan) that can perform arbitrary computations on the computer, but can only communicate a bounded amount of information (like 1GB) back to the bad guys. In this model, symmetric authentication schemes [Dzi06,DLW06,CDD07], password authentication [DLW06] and secret-sharing [DP07] have been constructed. Recently, the first public-key primitives in the BRM model were constructed by Alwen et al. [ADW09]. The setting of memory attacks discussed before is basically the BRM model, but ignoring all the issues that additionally come up when using huge keys while requiring the computations of the honest parties to be efficient.
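To make the bounded-range leakage of the memory-attack model concrete, here is a toy Python rendering (my illustration, not the survey's formalism) of what the adversary obtains: she may pick any efficiently computable function f of the secret key, as long as its output is at most c·n bits.

```python
import hashlib
import secrets

n = 256      # secret-key length in bits
c = 0.25     # leakage parameter; the model requires c < 1
sk = secrets.token_bytes(n // 8)

def adversary_f(key: bytes) -> bytes:
    # The adversary may choose ANY efficient function here; the model
    # only constrains the output length to c*n bits. As an arbitrary
    # example, we leak a prefix of a hash of the key.
    out_bytes = int(c * n) // 8
    return hashlib.sha256(key).digest()[:out_bytes]

leak = adversary_f(sk)
assert len(leak) * 8 <= c * n  # the leakage respects the range bound
```

Security then means the scheme's usual security game is still won by no efficient adversary, even one holding `leak` in addition to its normal view.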

Publication date: 2010